Web Survey Bibliography
Title Who fails and who passes instructed response item attention checks in web surveys?
Author Gummer, T.; Roßmann, J.; Silber, H.
Year 2017
Access date 15.09.2017
Abstract Providing high-quality answers requires respondents to devote their attention to completing the questionnaire and, thus, to assess each question thoroughly. This is particularly challenging in web surveys, which lack interviewers who can assess how carefully respondents answer the questions and motivate them to be more attentive if necessary. Inattentiveness can provoke response behavior commonly associated with measurement and nonresponse error: respondents may only superficially comprehend the question, retrieve semi-relevant or irrelevant information, fail to properly form a judgement, or fail to map a judgement onto the available response options. Consequently, attention checks such as Instructed Response Items (IRI) have been proposed to identify inattentive respondents. An IRI is included as one item in a grid and instructs respondents to mark a specific response category (e.g., “click strongly agree”). The instruction is not incorporated into the question text but is placed like an item label. The present study focuses on IRI attention checks because they (i) are easy to create and implement in a survey, (ii) require little space in a questionnaire (i.e., one item in a grid), (iii) provide a distinct measure of failing or passing the attention check, (iv) are not cognitively demanding, and (v), most importantly, provide a measure of how thoroughly respondents read the items of a grid.
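For illustration, scoring an IRI reduces to comparing each respondent's answer against the instructed category. Below is a minimal sketch in Python; the column names and the instructed category of 5 (“strongly agree” on the five-point scale) are hypothetical, not taken from the study:

```python
import pandas as pd

# Hypothetical grid responses on a five-point scale; "iri" is the
# instructed response item ("click strongly agree" -> category 5 here).
grid = pd.DataFrame({
    "item_1": [4, 2, 5],
    "item_2": [3, 3, 5],
    "iri":    [5, 2, 5],
})

IRI_EXPECTED = 5  # assumed instructed category; illustrative only
# A respondent fails the check by marking anything but the instructed category.
grid["failed_iri"] = (grid["iri"] != IRI_EXPECTED).astype(int)
print(grid["failed_iri"].tolist())  # [0, 1, 0]
```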
Most of the literature on attention checks has focused on the consistency of some “key” constructs, so that IRIs typically serve as a local measure of inattentiveness for the grid in which they are incorporated (e.g., Berinsky, Margolis and Sances 2014; Oppenheimer et al. 2009). This body of research concentrates on how the consistency of these key constructs can be improved by relying on the attention check measure, for instance, by deleting “inattentive” respondents. In the present study, we extend the research on attention checks by addressing the question of which respondents fail an IRI and, thus, show questionable response behavior.
To answer this research question, we draw on a web-based panel survey with seven waves conducted between June and October 2013 in Germany. In each wave of the panel, an IRI attention check was implemented in a grid question with a five-point scale. Across waves, the proportion of respondents failing the IRIs varied between 6.1% and 15.7%. Based on these data, logistic hybrid panel regression was used to investigate the effects of time-invariant (e.g., sex, age, education) and time-varying (e.g., interest in the survey topic, respondent motivation) factors on the likelihood of failing an IRI. Consequently, the results of our study provide additional insights into who shows questionable response behavior in web surveys. Moreover, our methodological approach allows for a finer-grained discussion of whether this response behavior results from rather static respondent characteristics or is subject to change.
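The hybrid (“within-between”) specification decomposes each time-varying predictor into a respondent-level mean (between effect) and the wave-specific deviation from that mean (within effect). The sketch below illustrates this decomposition in Python with hypothetical variable names; it approximates the study's random-effects hybrid model with a pooled logistic regression and respondent-clustered standard errors, which is a simplification rather than the exact estimator used in the paper:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per respondent-wave, with a
# binary outcome failed_iri and illustrative covariate names.
df = pd.read_csv("panel_waves.csv")  # hypothetical file

# Within-between decomposition of the time-varying predictors: the
# person mean captures stable between-respondent differences, the
# deviation from it captures within-respondent change across waves.
for var in ["topic_interest", "motivation"]:
    df[f"{var}_between"] = df.groupby("id")[var].transform("mean")
    df[f"{var}_within"] = df[var] - df[f"{var}_between"]

# Pooled logistic approximation of the hybrid model; standard errors
# are clustered by respondent to account for repeated observations.
model = smf.logit(
    "failed_iri ~ female + age + education"
    " + topic_interest_between + topic_interest_within"
    " + motivation_between + motivation_within",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["id"]})
print(result.summary())
```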
Access/Direct link Conference Homepage (abstract) / (presentation)
Year of publication 2017
Bibliographic type Conferences, workshops, tutorials, presentations
Web survey bibliography (4086)
- Displaying Videos in Web Surveys: Implications for Complete Viewing and Survey Responses; 2017; Mendelson, J.; Lee Gibson, J.; Romano Bergstrom, J. C.
- Using experts’ consensus (the Delphi method) to evaluate weighting techniques in web surveys not...; 2017; Toepoel, V.; Emerson, H.
- Mind the Mode: Differences in Paper vs. Web-Based Survey Modes Among Women With Cancer; 2017; Hagan, T. L.; Belcher, S. M.; Donovan, H. S.
- Answering Without Reading: IMCs and Strong Satisficing in Online Surveys; 2017; Anduiza, E.; Galais, C.
- Ideal and maximum length for a web survey; 2017; Revilla, M.; Ochoa, C.
- Social desirability bias in self-reported well-being measures: evidence from an online survey; 2017; Caputo, A.
- Web-Based Survey Methodology; 2017; Wright, K. B.
- Handbook of Research Methods in Health Social Sciences; 2017; Liamputtong, P.
- Lessons from recruitment to an internet based survey for Degenerative Cervical Myelopathy: merits of...; 2017; Davies, B.; Kotter, M. R.
- Web Survey Gamification - Increasing Data Quality in Web Surveys by Using Game Design Elements; 2017; Schacht, S.; Keusch, F.; Bergmann, N.; Morana, S.
- Effects of sampling procedure on data quality in a web survey; 2017; Rimac, I.; Ogresta, J.
- Comparability of web and telephone surveys for the measurement of subjective well-being; 2017; Sarracino, F.; Riillo, C. F. A.; Mikucka, M.
- Achieving Strong Privacy in Online Survey; 2017; Zhou, Yo.; Zhou, Yi.; Chen, S.; Wu, S. S.
- A Meta-Analysis of the Effects of Incentives on Response Rate in Online Survey Studies; 2017; Mohammad Asire, A.
- Telephone versus Online Survey Modes for Election Studies: Comparing Canadian Public Opinion and Vote...; 2017; Breton, C.; Cutler, F.; Lachance, S.; Mierke-Zatwarnicki, A.
- Examining Factors Impacting Online Survey Response Rates in Educational Research: Perceptions of Graduate...; 2017; Saleh, A.; Bista, K.
- Usability Testing for Survey Research; 2017; Geisen, E.; Romano Bergstrom, J. C.
- Paradata as an aide to questionnaire design: Improving quality and reducing burden; 2017; Timm, E.; Stewart, J.; Sidney, I.
- Fieldwork monitoring and managing with time-related paradata; 2017; Vandenplas, C.
- Interviewer effects on onliner and offliner participation in the German Internet Panel; 2017; Herzing, J. M. E.; Blom, A. G.; Meuleman, B.
- Interviewer Gender and Survey Responses: The Effects of Humanizing Cues Variations; 2017; Jablonski, W.; Krzewinska, A.; Grzeszkiewicz-Radulska, K.
- Millennials and emojis in Spain and Mexico; 2017; Bosch Jover, O.; Revilla, M.
- Where, When, How and with What Do Panel Interviews Take Place and Is the Quality of Answers Affected...; 2017; Niebruegge, S.
- Comparing the same Questionnaire between five Online Panels: A Study of the Effect of Recruitment Strategy...; 2017; Schnell, R.; Panreck, L.
- Nonresponses as context-sensitive response behaviour of participants in online-surveys and their relevance...; 2017; Wetzlehuetter, D.
- Do distractions during web survey completion affect data quality? Findings from a laboratory experiment...; 2017; Wenz, A.
- Predicting Breakoffs in Web Surveys; 2017; Mittereder, F.; West, B. T.
- Measuring Subjective Health and Life Satisfaction with U.S. Hispanics; 2017; Lee, S.; Davis, R.
- Humanizing Cues in Internet Surveys: Investigating Respondent Cognitive Processes; 2017; Jablonski, W.; Grzeszkiewicz-Radulska, K.; Krzewinska, A.
- A Comparison of Emerging Pretesting Methods for Evaluating “Modern” Surveys; 2017; Geisen, E.; Murphy, J.
- The Effect of Respondent Commitment on Response Quality in Two Online Surveys; 2017; Cibelli Hibben, K.
- Pushing to web in the ISSP; 2017; Jonsdottir, G. A.; Dofradottir, A. G.; Einarsson, H. B.
- The 2016 Canadian Census: An Innovative Wave Collection Methodology to Maximize Self-Response and Internet...; 2017; Mathieu, P.
- Push2web or less is more? Experimental evidence from a mixed-mode population survey at the community...; 2017; Neumann, R.; Haeder, M.; Brust, O.; Dittrich, E.; von Hermanni, H.
- In search of best practices; 2017; Kappelhof, J. W. S.; Steijn, S.
- Redirected Inbound Call Sampling (RICS): A New Methodology; 2017; Krotki, K.; Bobashev, G.; Levine, B.; Richards, S.
- An Empirical Process for Using Non-probability Survey for Inference; 2017; Tortora, R.; Iachan, R.
- The perils of non-probability sampling; 2017; Bethlehem, J.
- A Comparison of Two Nonprobability Samples with Probability Samples; 2017; Zack, E. S.; Kennedy, J. M.
- Rates, Delays, and Completeness of General Practitioners’ Responses to a Postal Versus Web-Based...; 2017; Sebo, P.; Maisonneuve, H.; Cerutti, B.; Pascal Fournier, J.; Haller, D. M.
- Necessary but Insufficient: Why Measurement Invariance Tests Need Online Probing as a Complementary...; 2017; Meitinger, K.
- Nonresponse in Organizational Surveying: Attitudinal Distribution Form and Conditional Response Probabilities...; 2017; Kulas, J. T.; Robinson, D. H.; Kellar, D. Z.; Smith, J. A.
- Theory and Practice in Nonprobability Surveys: Parallels between Causal Inference and Survey Inference...; 2017; Mercer, A. W.; Kreuter, F.; Keeter, S.; Stuart, E. A.
- Is There a Future for Surveys?; 2017; Miller, P. V.
- Reducing speeding in web surveys by providing immediate feedback; 2017; Conrad, F.; Tourangeau, R.; Couper, M. P.; Zhang, C.
- Social Desirability and Undesirability Effects on Survey Response Latencies; 2017; Andersen, H.; Mayerl, J.
- A Working Example of How to Use Artificial Intelligence To Automate and Transform Surveys Into Customer...; 2017; Neve, S.
- A Case Study on Evaluating the Relevance of Some Rules for Writing Requirements through an Online Survey...; 2017; Warnier, M.; Condamines, A.
- Estimating the Impact of Measurement Differences Introduced by Efforts to Reach a Balanced Response...; 2017; Kappelhof, J. W. S.; De Leeuw, E. D.
- Targeted letters: Effects on sample composition and item non-response; 2017; Bianchi, A.; Biffignandi, S.